1.
AEM Educ Train ; 5(3): e10601, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34141997

ABSTRACT

BACKGROUND: Free Open Access Medical education (FOAM) use among residents continues to rise. However, FOAM often lacks quality assurance processes, and residents receive little guidance on quality assessment. The Academic Life in Emergency Medicine Approved Instructional Resources tool (AAT) was created for FOAM appraisal by and for expert educators and has demonstrated validity in this context. It has yet to be evaluated in other populations. OBJECTIVES: We assessed the AAT's usability in a diverse population of practicing emergency medicine (EM) physicians, residents, and medical students; solicited feedback; and developed a revised tool. METHODS: As part of the Medical Education Translational Resources: Impact and Quality (METRIQ) study, we recruited medical students, EM residents, and EM attendings to evaluate five FOAM posts with the AAT and provide quantitative and qualitative feedback via an online survey. Two independent analysts performed a qualitative thematic analysis, with discrepancies resolved through discussion and negotiated consensus. This analysis informed the development of an initial revised AAT, which was further refined after pilot testing among the author group. The final tool was reassessed for reliability. RESULTS: Of 330 recruited international participants, 309 completed all ratings. The Best Evidence in Emergency Medicine (BEEM) score was the component most frequently reported as difficult to use. Several themes emerged from the qualitative analysis. Themes supporting ease of use included being understandable, logically structured, concise, and aligned with educational value; limitations included deviation from questionnaire best practices, validity concerns, and challenges in assessing evidence-based medicine. Themes supporting the tool's use included its evaluative utility and usability.
The author group pilot tested the initial revised AAT; the total-score average-measure intraclass correlation coefficient (ICC) indicated moderate reliability (ICC = 0.68, 95% confidence interval [CI] = 0 to 0.962). The final AAT's average-measure ICC was 0.88 (95% CI = 0.77 to 0.95). CONCLUSIONS: We developed the final revised AAT from usability feedback. The new score has significantly increased usability but will need to be reassessed for reliability in a broad population.
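The average-measure ICCs reported above correspond to a two-way random-effects model (ICC(2,k) in the usual Shrout-Fleiss naming). As an illustration only — the ratings matrix below is fabricated, not METRIQ data — a minimal ANOVA-based sketch of that computation:

```python
import numpy as np

def icc_2k(ratings: np.ndarray) -> float:
    """Average-measure ICC, two-way random effects (Shrout-Fleiss ICC(2,k)).

    `ratings` is an (n subjects x k raters) matrix with no missing cells.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject (per-post) means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Two-way ANOVA sums of squares
    ss_rows = k * np.sum((row_means - grand) ** 2)
    ss_cols = n * np.sum((col_means - grand) ** 2)
    ss_total = np.sum((ratings - grand) ** 2)
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (ms_cols - ms_err) / n)

# Hypothetical example: 6 posts rated by 4 raters on a 7-point scale
scores = np.array([
    [6, 6, 5, 6],
    [2, 3, 2, 2],
    [4, 4, 5, 4],
    [7, 6, 7, 7],
    [3, 3, 3, 4],
    [5, 5, 4, 5],
], dtype=float)
print(round(icc_2k(scores), 2))  # → 0.98
```

In practice a library routine (e.g., one that also reports confidence intervals, as the CIs above require) would be preferred over this hand-rolled version.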

2.
AEM Educ Train ; 3(4): 387-392, 2019 Oct.
Article in English | MEDLINE | ID: mdl-31637356

ABSTRACT

BACKGROUND: With the rapid proliferation of online medical education resources, quality evaluation is increasingly critical. The Medical Education Translational Resources: Impact and Quality (METRIQ) study evaluated the METRIQ-8 quality assessment instrument for blogs and collected feedback to improve it. METHODS: As part of the larger METRIQ study, participants rated the quality of five blog posts on clinical emergency medicine topics using the eight-item METRIQ-8 score. Next, participants used a 7-point Likert scale and free-text comments to evaluate the METRIQ-8 score on ease of use, clarity of items, and likelihood of recommending it to others. Descriptive statistics were calculated and comments were thematically analyzed to guide the development of a revised METRIQ (rMETRIQ) score. RESULTS: A total of 309 emergency medicine attendings, residents, and medical students completed the survey. The majority of participants felt the METRIQ-8 score was easy to use (mean ± SD = 2.7 ± 1.1 out of 7, with 1 indicating strong agreement) and would recommend it to others (2.7 ± 1.3 out of 7, with 1 indicating strong agreement). The thematic analysis suggested clarifying ambiguous questions, shortening the 7-point scale, specifying scoring anchors for the questions, eliminating the "unsure" option, and grouping related questions. This analysis guided changes that resulted in the rMETRIQ score. CONCLUSION: Feedback on the METRIQ-8 score contributed to the development of the rMETRIQ score, which has improved clarity and usability. Further validity evidence on the rMETRIQ score is required.

3.
Ann Emerg Med ; 70(3): 394-401, 2017 Sep.
Article in English | MEDLINE | ID: mdl-28262317

ABSTRACT

STUDY OBJECTIVE: Open educational resources such as blogs are increasingly used for medical education. Gestalt is generally the evaluation method used for these resources; however, little information has been published on it. We aim to evaluate the reliability of gestalt in the assessment of emergency medicine blogs. METHODS: We identified 60 English-language emergency medicine Web sites that posted clinically oriented blogs between January 1, 2016, and February 24, 2016. Ten Web sites were selected with a random-number generator. Medical students, emergency medicine residents, and emergency medicine attending physicians evaluated the 2 most recent clinical blog posts from each site for quality, using a 7-point Likert scale. The mean gestalt scores of each blog post were compared between groups with Pearson's correlations. Single and average measure intraclass correlation coefficients were calculated within groups. A generalizability study evaluated variance within gestalt and a decision study calculated the number of raters required to reliably (>0.8) estimate quality. RESULTS: One hundred twenty-one medical students, 88 residents, and 100 attending physicians (93.6% of enrolled participants) evaluated all 20 blog posts. Single-measure intraclass correlation coefficients within groups were fair to poor (0.36 to 0.40). Average-measure intraclass correlation coefficients were more reliable (0.811 to 0.840). Mean gestalt ratings by attending physicians correlated strongly with those by medical students (r=0.92) and residents (r=0.99). The generalizability coefficient was 0.91 for the complete data set. The decision study found that 42 gestalt ratings were required to reliably evaluate quality (>0.8). CONCLUSION: The mean gestalt quality ratings of blog posts between medical students, residents, and attending physicians correlate strongly, but individual ratings are unreliable. With sufficient raters, mean gestalt ratings provide a community standard for assessment.


Subject(s)
Blogging/standards, Education, Medical/standards, Educational Measurement/methods, Emergency Medicine/education, Gestalt Theory, Adult, Blogging/trends, Clinical Competence, Education, Medical/methods, Female, Humans, Internship and Residency, Male, Reproducibility of Results, Social Media/statistics & numerical data, Students, Medical
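The decision study in the entry above asks how many raters are needed to push reliability past 0.8 when single-rater agreement is only fair (ICC 0.36 to 0.40). A common back-of-the-envelope version of that question uses the Spearman-Brown prophecy formula; note the paper's full generalizability study works from estimated variance components, so its answer (42 raters) differs from this simplified sketch:

```python
import math

def spearman_brown(single_rater_rel: float, k: int) -> float:
    """Predicted reliability of the mean of k raters, given one rater's reliability."""
    return k * single_rater_rel / (1 + (k - 1) * single_rater_rel)

def raters_needed(single_rater_rel: float, target: float) -> int:
    """Smallest k whose averaged rating reaches the target reliability."""
    k = target * (1 - single_rater_rel) / (single_rater_rel * (1 - target))
    return math.ceil(k)

# Single-measure ICCs in the study ranged from 0.36 to 0.40
print(raters_needed(0.36, 0.8))  # → 8 under this simplified model
print(round(spearman_brown(0.36, 8), 2))  # → 0.82
```

The gap between 8 (Spearman-Brown) and 42 (the paper's decision study) illustrates why variance-component-based G-theory estimates, which separate rater, item, and interaction variance, are more conservative.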
4.
Perspect Med Educ ; 6(2): 91-98, 2017 Apr.
Article in English | MEDLINE | ID: mdl-28243948

ABSTRACT

PURPOSE: Online open educational resources are increasingly used in medical education, particularly blogs and podcasts. However, it is unclear whether these resources can be adequately appraised by end-users. Our goal was to determine whether gestalt-based recommendations are sufficient for emergency medicine trainees and attending physicians to reliably recommend online educational resources to others. METHODS: Raters (33 trainees and 21 attendings in emergency medicine from North America) were asked to rate 40 blog posts according to whether, based on their gestalt, they would recommend the resource to (1) a trainee or (2) an attending physician. The ratings' reliability was assessed using intraclass correlation coefficients (ICC). Associations between groups' mean scores were assessed using Pearson's r. A repeated measures analysis of variance (RM-ANOVA) was completed to determine the effect of the level of training on the gestalt recommendation scale (i.e., trainee vs. attending). RESULTS: Trainees demonstrated poor reliability when recommending resources for other trainees (ICC = 0.21, 95% CI 0.13-0.39) and attendings (ICC = 0.16, 95% CI = 0.09-0.30). Similarly, attendings had poor reliability when recommending resources for trainees (ICC = 0.27, 95% CI 0.18-0.41) and other attendings (ICC = 0.22, 95% CI 0.14-0.35). There were moderate correlations between the mean scores for each blog post when either trainees or attendings considered the same target audience. The RM-ANOVA also corroborated that there is a main effect of the proposed target audience on the ratings by both trainees and attendings. CONCLUSIONS: A gestalt-based rating system is not sufficiently reliable when recommending online educational resources to trainees and attendings. Trainees' gestalt ratings for recommending resources for both groups were especially unreliable. Our findings suggest the need for structured rating systems to rate online educational resources.

5.
West J Emerg Med ; 17(5): 574-84, 2016 Sep.
Article in English | MEDLINE | ID: mdl-27625722

ABSTRACT

INTRODUCTION: Online education resources (OERs), like blogs and podcasts, increasingly augment or replace traditional medical education resources such as textbooks and lectures. Trainees' ability to evaluate these resources is poor, and few quality assessment aids have been developed to assist them. This study aimed to derive a quality evaluation instrument for this purpose. METHODS: We used a three-phase methodology. In Phase 1, a previously derived list of 151 OER quality indicators was reduced to 13 items using data from published consensus-building studies (of medical educators, expert podcasters, and expert bloggers) and subsequent evaluation by our team. In Phase 2, these 13 items were converted to seven-point Likert scales used by trainee raters (n=40) to evaluate 39 OERs. The reliability and usability of these 13 rating items was determined using responses from trainee raters, and top items were used to create two OER quality evaluation instruments. In Phase 3, these instruments were compared to an external certification process (the ALiEM AIR certification) and the gestalt evaluation of the same 39 blog posts by 20 faculty educators. RESULTS: Two quality-evaluation instruments were derived with fair inter-rater reliability: the METRIQ-8 Score (intraclass correlation coefficient [ICC]=0.30, p<0.001) and the METRIQ-5 Score (ICC=0.22, p<0.001). Both scores, when calculated using the derivation data, correlated with educator gestalt (Pearson's r=0.35, p=0.03 and r=0.41, p<0.01, respectively) and were related to increased odds of receiving an ALiEM AIR certification (odds ratio=1.28, p=0.03; OR=1.5, p=0.004, respectively). CONCLUSION: Two novel scoring instruments with adequate psychometric properties were derived to assist trainees in evaluating OER quality and correlated favourably with gestalt ratings of online educational resources by faculty educators. Further testing is needed to ensure these instruments are accurate when applied by trainees.


Subject(s)
Clinical Competence, Curriculum/trends, Education, Medical/standards, Reproducibility of Results, Humans, Internet/trends, Internship and Residency, Social Media/statistics & numerical data, Surveys and Questionnaires
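The validity coefficients in the entry above (instrument score vs. educator gestalt) are ordinary Pearson product-moment correlations. As a sketch with made-up numbers, not the study data:

```python
import numpy as np

def pearson_r(x, y) -> float:
    """Pearson product-moment correlation of two equal-length sequences."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xd, yd = x - x.mean(), y - y.mean()
    return float(np.sum(xd * yd) / np.sqrt(np.sum(xd ** 2) * np.sum(yd ** 2)))

# Hypothetical METRIQ-5 totals and mean educator gestalt for 6 blog posts
metriq5 = [12, 18, 25, 15, 30, 22]
gestalt = [3.1, 4.0, 5.2, 3.8, 6.1, 4.6]
print(round(pearson_r(metriq5, gestalt), 2))  # → 0.99
```

The toy data here correlate almost perfectly by construction; the study's real coefficients (r=0.35 and r=0.41) are far more modest, which is typical when many raters and heterogeneous posts are involved.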